Invited Keynote Talk Modeling Reasoning Mechanisms by Neural-Symbolic Learning

Author

  • Kai-Uwe Kühnberger
Abstract

Currently, neural-symbolic integration covers – at least in theory – a broad range of reasoning types: neural representations (and partially also neural-inspired learning approaches) exist for modeling propositional logic (programs), whole classes of many-valued logics, modal logic, temporal logic, and epistemic logic, to mention just some important examples [2,4]. Besides these propositional variants of logical theories, first proposals also exist for approximating "infinity" with neural means, in particular, theories of first-order logic. An example is the core method, intended to learn the semantics of the single-step operator TP for first-order logic (programs) with a neural network [1]. Another example is the neural approximation of variable-free first-order logic by learning representations of arrow constructions (which represent logical expressions) in ℝ using Topos constructions [3].
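The core method mentioned above can be illustrated, for the propositional case, with a minimal sketch: a feedforward threshold network whose single forward pass computes the immediate-consequence operator TP of a logic program (the example program, atom names, and function names here are illustrative assumptions, not taken from [1]):

```python
# Sketch of the "core method" for a propositional logic program:
# one hidden threshold unit per clause, so that a single forward
# pass of the network computes the immediate-consequence operator T_P.

# Clauses head <- body (positive bodies only, for simplicity):
program = [("a", ["b", "c"]),   # a <- b, c
           ("b", ["d"]),        # b <- d
           ("c", ["d"]),        # c <- d
           ("d", [])]           # d <-  (a fact)

def tp_network(interpretation):
    """One forward pass: input layer = current interpretation (the set of
    atoms assumed true), hidden layer = one threshold unit per clause that
    fires iff its whole body is satisfied, output layer = disjunction of
    the clause units sharing the same head atom."""
    hidden = [sum(1 for x in body if x in interpretation) >= len(body)
              for _, body in program]
    return {head for (head, _), fired in zip(program, hidden) if fired}

# Iterating the network from the empty interpretation reaches the least
# fixed point of T_P, i.e. the least Herbrand model of the program.
I = set()
while tp_network(I) != I:
    I = tp_network(I)
print(sorted(I))  # the least model of the program
```

For genuinely first-order programs the Herbrand base is infinite, which is exactly why [1] has to *approximate* TP with a trainable network rather than encode it exactly as above.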


Similar Resources

Statistical Relational Learning - A Logical Approach (Abstract of Invited Talk)

In this talk I will briefly outline and survey some developments in the field of statistical relational learning, especially focussing on logical approaches. Statistical relational learning is a novel research stream within artificial intelligence that combines principles of relational logic, learning and probabilistic models. This endeavor is similar in spirit to the developments in Neural Symbo...


Neurons and Symbols: A Manifesto

We discuss the purpose of neural-symbolic integration including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of ef...


Invited Talk: Neuromorphic Computing Principles, Achievements, and Potentials

Neural networks have recently taken the field of machine learning by storm. Their success rests upon the availability of high-performance computing hardware, which allows the training of very wide and deep networks. Traditional neural networks have very limited biological realism. Recent work on more brain-like hardware architectures has led to the first large-scale implementations of neuromorphic computin...


Invited Talk: Learning probability by comparison

Learning probability by probabilistic modeling is a major task in statistical machine learning and it has traditionally been supported by maximum likelihood estimation applied to generative models or by a local maximizer applied to discriminative models. In this talk, we introduce a third approach, an innovative one that learns probability by comparing probabilistic events. In our approach, we ...


Learning of Human-like Algebraic Reasoning Using Deep Feedforward Neural Networks

There is a wide gap between symbolic reasoning and deep learning. In this research, we explore the possibility of using deep learning to improve symbolic reasoning. Briefly, in a reasoning system, a deep feedforward neural network is used to guide rewriting processes after learning from algebraic reasoning examples produced by humans. To enable the neural network to recognise patterns of algebr...



Journal:

Volume   Issue 

Pages  -

Publication date: 2008